AI Models Show Cognitive Decline When Trained on Viral Social Media Content
Large language models exposed to viral social media data exhibit measurable cognitive decay, according to new research from the University of Texas at Austin, Texas A&M University, and Purdue University. The researchers dub the phenomenon 'LLM brain rot' and connect it to the 'Dead Internet' theory, which they see evolving into a 'Zombie Internet': one populated by AI systems that retain basic functionality but suffer declining coherence.
Researchers constructed two datasets from Twitter: one optimized for engagement, built from short viral posts, and a control set comprising longer, factual content. Retraining open-weight models such as Llama and Qwen on these datasets revealed stark differences. Models trained exclusively on the viral data saw reasoning accuracy on the ARC-Challenge benchmark drop from 74.9 to 57.2, while their long-context comprehension scores plummeted from 84.4 to 52.3.
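To make the dataset construction concrete, here is a minimal sketch of how a corpus of posts might be partitioned into an engagement-optimized set and a longer-form control set. This is an illustration only, not the authors' actual pipeline: the field names (`likes`, `text`) and the thresholds are hypothetical assumptions.

```python
def split_corpus(posts, like_threshold=1000, min_length=280):
    """Partition posts into a high-engagement 'viral' set and a
    longer-form control set. Thresholds are illustrative, not the
    values used in the study."""
    viral, control = [], []
    for post in posts:
        if post["likes"] >= like_threshold:
            # Short, high-engagement content goes into the viral set.
            viral.append(post)
        elif len(post["text"]) >= min_length:
            # Longer, lower-engagement content serves as the control.
            control.append(post)
    return viral, control

# Hypothetical example posts.
sample = [
    {"text": "hot take!!!", "likes": 5000},
    {"text": "A detailed thread on measurement methodology. " * 10, "likes": 40},
    {"text": "meh", "likes": 3},
]
viral, control = split_corpus(sample)
```

In this toy run the first post lands in the viral set on engagement alone, the second in the control set by length, and the third in neither. The actual study's selection criteria were more elaborate, but the core idea is the same: partition by engagement signals versus substance.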
The decay follows a distinct pattern. Affected models increasingly skip intermediate reasoning steps, a behavior the authors term 'thought skipping,' producing shorter, less structured responses with more factual errors. The study suggests that an attention deficit emerges in the models' internal mechanics as exposure to low-quality content rises, raising concerns about AI's future reliability if it is trained on deteriorating online data.